Augmentation-Aware Self-Supervision for Data-Efficient GAN Training

Neural Information Processing Systems

We further encourage the generator to learn adversarially from the self-supervised discriminator by generating samples whose augmentations are predictable, as they are for real data but not for fake data.
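The self-supervision signal described here can be sketched as follows: a random transformation is applied to each sample, and its index is kept as the target the discriminator must predict. This is a minimal illustration using 90-degree rotations (a common choice in self-supervised GANs); the helper name `rotate_with_label` is hypothetical, not from the paper.

```python
import numpy as np

def rotate_with_label(image: np.ndarray, rng: np.random.Generator):
    """Apply a random 90-degree rotation and return the rotation index
    as the self-supervision target the discriminator should predict."""
    k = int(rng.integers(4))  # 0, 1, 2, or 3 quarter-turns
    return np.rot90(image, k), k

rng = np.random.default_rng(0)
img = np.arange(16, dtype=np.float32).reshape(4, 4)
aug, label = rotate_with_label(img, rng)
```

During training, the discriminator would be penalized for mispredicting `label` on (augmented) real samples, giving the generator a real-likeness signal beyond the plain real/fake decision.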


Self-Supervised GANs with Label Augmentation

Neural Information Processing Systems

Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment. However, the separate self-supervised tasks in existing self-supervised GANs pursue a goal inconsistent with generative modeling, because their self-supervised classifiers are agnostic to the generator distribution. To address this problem, we propose a novel self-supervised GAN that unifies the GAN task with the self-supervised task by augmenting the GAN labels (real or fake) via self-supervision of data transformation. Specifically, the original discriminator and self-supervised classifier are unified into a label-augmented discriminator that predicts the augmented labels so as to be aware of both the generator distribution and the data distribution under every transformation, and then provides the discrepancy between them to optimize the generator. Theoretically, we prove that the optimal generator converges to replicate the real data distribution. Empirically, we show that the proposed method significantly outperforms previous self-supervised and data-augmentation GANs on both generative modeling and representation learning across benchmark datasets.
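The label-augmentation scheme described in the abstract can be illustrated concretely: with K transformations, the real/fake labels are expanded into 2K classes, so the discriminator becomes a single 2K-way classifier over (real/fake, transformation) pairs. The sketch below is an assumption-laden illustration, not the paper's code; the names `augmented_label` and `one_hot` are hypothetical.

```python
import numpy as np

def augmented_label(is_real: bool, transform_idx: int, num_transforms: int) -> int:
    """Map a (real/fake, transformation) pair to one of 2K augmented classes.

    Classes 0..K-1: real data under each of the K transformations.
    Classes K..2K-1: fake data under each of the K transformations.
    """
    assert 0 <= transform_idx < num_transforms
    return transform_idx if is_real else num_transforms + transform_idx

def one_hot(label: int, num_classes: int) -> np.ndarray:
    """Cross-entropy target for the label-augmented discriminator."""
    target = np.zeros(num_classes)
    target[label] = 1.0
    return target

K = 4  # e.g. rotations by 0, 90, 180, 270 degrees
print(augmented_label(True, 2, K))   # → 2 (real sample, 180° rotation)
print(augmented_label(False, 2, K))  # → 6 (fake sample, 180° rotation)
```

Because every class is conditioned on the transformation, the classifier sees both the data distribution and the generator distribution under each transformation, which is the property the abstract attributes to the unified discriminator.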








A Proofs

Neural Information Processing Systems

To analyze the GANs, we rewrite the objective functions into forms whose derivatives are easy to calculate.

Proposition 1. For any fixed generator, given a data …

Proposition 2. For any continuous and differentiable function f whose domain is X, we have: E … Readers are encouraged to refer to the original proof in [57] for more details.

Theorem 2. Given the optimal classifier … Please see Appendix A.2 for details.

Theorem 3. Given the optimal label-augmented discriminator, the objective function for the generator of SSGAN-LA boils down to: min …

Theorem 4. At the equilibrium point of DAGAN, the optimal generator implies … We first prove the first sentence of this theorem, and then the second.